
[WIP] a proposal to document all datasets and models #163

Open · wants to merge 2 commits into master
Conversation

@campoy (Contributor) commented Mar 9, 2018

Signed-off-by: Francesc Campoy [email protected]

@campoy (author) commented Mar 9, 2018

Initial proposal, pretty vague for now but establishing what I think should be done as a first step.

Please provide feedback on the content rather than the form or typos (we'll fix those later).

I'm especially interested in @eiso's and @vmarkovtsev's opinions, but feel free to provide yours too!

@vmarkovtsev (Collaborator)

This is to notify that I am here and struggling to find the time for a proper review. ETA is Friday.

@campoy (author) commented Mar 20, 2018

Any news, @vmarkovtsev ?

@vmarkovtsev (Collaborator)

Damn, I am squeezed, but I will do my best to review ASAP. Sorry!

@eiso (Member) commented Mar 21, 2018

I read through the proposal carefully and @campoy and I also discussed it in person. I am a big fan of this approach to information design. It makes it a lot clearer.

A minor comment on the name *predictors*: e.g., "given input code, give me all tests that are likely to be related to it." To me this is not a prediction; it is inference.

@eiso requested a review from marnovo on March 21, 2018
@campoy (author) commented Mar 21, 2018

Good point. I replaced *predictor* with *inferencer*, which is probably more accurate.
Any other reviews are welcome; maybe it's time to discuss this in a meeting?

@vmarkovtsev (Collaborator) left a comment

I am rejecting this proposal. It does not fit into my vision of organizing ML at source{d}. We apparently need to run this through our project workflow: design document, design meetings, a kick-off meeting, and finally implementation.


> As we start publishing more datasets and models, it is important to keep in mind why we're doing this.
>
> > We publish datasets because we want to contribute back to the Open Source and Machine Learning communities.
@vmarkovtsev (Collaborator):

It's not only about contributing back, as with code. We also want to increase the popularity of MLonCode and attract more people. A dataset is always the starting point of any DS research. No data => no research.

> We consider datasets and models to be good when they are:
> - discoverable,
> - reproducible, and
> - reusable.
@vmarkovtsev (Collaborator):

What is meant by "reusable"? It is not a typical concept. I would say that what makes a dataset good is that it is:

  1. discoverable.
  2. documented in depth (this is different from (1)). So many times I have seen cool data that was described very poorly. It would also be nice to see problem suggestions for newbies.
  3. accessible. Data should be indexed if that makes sense, and the format matters. As I heard from Konstantin the other day, "just give me the darn CSVs, I am tired of fighting with Siva". If the dataset is big and targets Spark, it should not be a single gzipped txt; it should be splittable LZO chunks plus a tool for reading those files without Spark. Another example: data distributed through BTSync; nobody will serve it, as it is not another hot pirated movie or a cracked AAA game.
  4. reproducible. Few people really want it, but still.


> It seems to be quite established that the relationship between datasets, models, and other concepts is somehow expressed in the following graph.
>
> ![dataset graph](graph.png)
@vmarkovtsev (Collaborator):

"Kernel" is not a typical word. It may be used in NN contexts, but in general it is confusing. I would replace it with "training algorithm". Usually the algorithm trains the model; it does not generate it. The same goes for "inferencer", which I would replace with "application".

Awesome chart, I like it a lot!

> making predictions. Problems are the starting motivation
> and ending point of most of our Machine Learning processes.
>
> Problems have a clear objective, and a measure of success that
@vmarkovtsev (Collaborator):

This Oxford comma really confuses me. I was about to propose changing "let" to "lets" but then realized it was about both things.


> This is normally documented in research papers, with references
> to what datasets and kernels were used, as well as how much
> training time it took to obtain the resulting model.
@vmarkovtsev (Collaborator) commented Mar 22, 2018:

So this is the thing: there is a huge difference between a research paper and what we want to achieve. Papers are always limited in size, and the authors desperately try to squeeze in as much information as possible. This often leads to excluding important descriptions which are not strictly necessary but which simplify reproduction.

Think of it as a physical experiment. A paper includes: initial conditions; methodology; observations throughout the experiment's lifetime; the empirical results; explanations and conclusions. Unfortunately, not ML papers.

So a dream model documentation should have:

  1. Thorough dataset documentation.
  2. A problem statement. This includes choosing the right quality metric.
  3. An architecture description and the metaparameter values. The random seed: this guy is so hard to get right and requires using exactly the same code.
  4. All possible plots of how the model was trained, plus time observations.
  5. Stability and deviations: how the metaparameters influence the results and how stable the training is. E.g. different random seeds may lead to quite different results if the model is buggy. Another example: we reduce the number of neurons 100x and get a 1% accuracy drop; that is a fair tradeoff for many people.
  6. Results: achieved metrics, with examples.

> - format of the dataset
> - what other datasets (and versions) were used to generate this?
> - what models have been trained with this dataset
> - LICENSE (the tools and scripts are licensed, but not the datasets?)
@vmarkovtsev (Collaborator):

Some of the issues are to be resolved by attaching the paper.

@eiso (Member):

Yes, but we can't assume we will have a paper for everything. It won't be feasible from a time perspective.

@vmarkovtsev (Collaborator):

Absolutely. I meant solely PGA here.
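To make the metadata discussion above concrete, here is a purely hypothetical sketch of a per-dataset `toml` descriptor (an option @campoy floats further down in the proposal) covering the fields the excerpt lists: format, upstream datasets, trained models, and license. Every field name and value is illustrative, not an agreed-upon schema:

```toml
# Hypothetical per-dataset metadata file; all names and values are made up.
name = "example-dataset"
version = "1.0.0"
format = "csv"            # format of the dataset
license = "CC-BY-4.0"     # answers the LICENSE question raised above

# what other datasets (and versions) were used to generate this?
[[upstream]]
name = "some-parent-dataset"
version = "0.9.2"

# what models have been trained with this dataset?
[[models]]
name = "example-model"
version = "2.1.0"
```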

> What are we missing?
> - Versioned models, corresponding to versioned datasets.
@vmarkovtsev (Collaborator):

There is versioning, though it is not reflected in src-d/models. Models can derive from each other, with or without a recorded relation to the parent. E.g. it is a common situation that our data engineering and filtering are not perfect and we miss data or pass in garbage; in that case, the relation to the previous model is saved. Sometimes it is just a regular update without a relation to the previous one.

@eiso (Member):

I think the parent relation makes @campoy's analogy to containers even stronger here. There is a lot we can and should learn from how the Docker registry tackled this.


> What are we missing?
> - Versioned models, corresponding to versioned datasets.
> - Reference to the code (kernel) that was used to generate the model.
@vmarkovtsev (Collaborator):

Another important note: models contain dependencies on the upstream models that were used in the generation process. Datasets are also models in this terminology and should have a UUID (yeah, this is confusing, I know).

The only way I see to reference the code is to record the whole Python package dependency tree. This still misses the actual custom calling code in many cases, and I need to apply some dark Python alchemy to discover and record it. We also need to store it somewhere in the model file.

@eiso (Member):

Since each model references the code it was created from, the dependency tree is there already.
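As a minimal sketch of the "record the whole Python package dependency tree" idea discussed above (assuming Python 3.8+ for `importlib.metadata`; the helper name is made up and is not part of any source{d} tool):

```python
# Hypothetical sketch: snapshot the installed package versions so they can be
# stored alongside (or inside) a model file, as discussed in this thread.
import importlib.metadata
import json


def snapshot_environment() -> dict:
    """Return {distribution_name: version} for every installed package."""
    return {
        dist.metadata["Name"]: dist.version
        for dist in importlib.metadata.distributions()
    }


if __name__ == "__main__":
    env = snapshot_environment()
    # Serialized, this blob could be embedded in the model file itself.
    print(json.dumps(env, indent=2, sort_keys=True)[:200])
```

Note that, as the comment above says, this captures only installed packages; the custom calling code still needs separate treatment.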


> Since we care about individual versioning of datasets and models,
> it seems like it's an obvious choice to use a git repository per dataset,
> and model.
@vmarkovtsev (Collaborator) commented Mar 22, 2018:

We will die. Seriously. I tried it, and it is completely unmaintainable. A special pain is adding new models and being blocked for a few days until the repository is created. I am strongly against this idea.

There is also the central registry of our models in src-d/models, which is strictly necessary as the only way to fetch the index and automatically download models in downstream apps.

We already have 5 models to date, and the only reason there are so few is that we did not have data. Now that we have PGA, we will bake new models like pies, with tens of different architectures and problems. Models are not code repositories; there is no point in contributing to existing ones. It is always about adding something new.

Besides, we need to solve the community problem, because we want to allow external people to push models into our registry. Think of it as a DockerHub for models.

@eiso (Member):

> Think of it as DockerHub for models.

What I believe @campoy is proposing is a DockerHub for models, datasets, training algorithms, applications, etc.

@vmarkovtsev (Collaborator):

A model is an artifact of a training algorithm. The algorithm is code, and we can improve it, fix it, etc. So the algorithms should live on GitHub/Git, separate from the model storage.

> Since we imagine these tools extracting information from the repositories
> automatically, it is important to keep formatting in mind.
>
> I'm currently considering whether a `toml` file should be defined containing
@vmarkovtsev (Collaborator) commented Mar 22, 2018:

This will not work. All the metadata should be generated automatically from the self-contained ASDF files. Otherwise this will be a nightmare to support.

@eiso (Member) commented Mar 23, 2018

@vmarkovtsev, based on your really nice and insightful feedback, I feel that you're not rejecting the proposal but wanting to amend the implementation.

I feel that @campoy's main point here is how we build, present, and communicate a mental model of how you build on top of the source{d} stack.

*(image)*

I think at this point it makes sense to have a small meeting about this. I would also like to invite our new Head of Architecture, @smola, to review this proposal.

@eiso requested a review from smola on March 23, 2018
@vmarkovtsev (Collaborator)

Yes, this better explains my rationale, thanks Eiso. I really love this graph BTW, and additional ❤️ for using Graphviz to plot it.

@campoy (author) commented Apr 4, 2018

So what should the next step be?
Should I drop this PR and follow the engineering workflow? I'm totally fine with that.

@vmarkovtsev (Collaborator)
I would go with a DD (template). It is easier to discuss and fight over, and this change would require action from the ML team or even Apps, depending on our depth level.

@eiso (Member) commented Apr 6, 2018

@vmarkovtsev @campoy, I agree with the next steps and am looping in @marnovo here.

@campoy (author) commented Apr 9, 2018

FYI, I'm following through on this and creating a Design Document for my ideas on the topic.
I'll update this issue once I have a draft.

@campoy (author) commented Apr 10, 2018

I wrote an initial Design Document.
It is intentionally very sparse for now.

https://docs.google.com/document/d/1EbwfOd4UpVXCprW-9ApPhX-HN6PXHODVbKj4ajJtDfM/edit?usp=sharing

What do you all think?

@smola (Contributor) left a comment

No major comments from my side. It looks good to me.


> - short description
> - long description and links to papers and blog posts
> - technical sheet
@smola (Contributor):

It might seem obvious, but we should also include:

- the date the dataset was generated
- the date the data was retrieved, if applicable (this might be included in the link to the original data sources)


> ### Versioning and Releases
>
> Every time a new version of a dataset or model is released a new tag and
@smola (Contributor):

Do we want to standardize dataset versions on something? Maybe just a date, or semver?

@campoy (author):

Semver sounds pretty good, but it's a solution, so I don't want to decide on it just yet.
I was considering using Docker Hub, which would bring this for free.

@smola (Contributor) commented Apr 26, 2018

@campoy
@dennwc has proposed JSON-LD and the Dataset schema to annotate datasets:
src-d/datasets#51

More info: https://developers.google.com/search/docs/data-types/dataset

It seems like a nice solution compared to rolling our own format and schema.
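For reference, a schema.org `Dataset` annotation in JSON-LD (the approach proposed above, documented at the Google link) looks roughly like the following sketch; every value here is a placeholder, not a real source{d} dataset entry:

```json
{
  "@context": "https://schema.org/",
  "@type": "Dataset",
  "name": "example-dataset",
  "description": "Placeholder description of the dataset.",
  "version": "1.0.0",
  "license": "https://creativecommons.org/licenses/by/4.0/",
  "datePublished": "2018-04-26",
  "distribution": [
    {
      "@type": "DataDownload",
      "encodingFormat": "text/csv",
      "contentUrl": "https://example.com/example-dataset.csv"
    }
  ]
}
```

Embedded in a dataset's landing page, this is the markup Google's Dataset Search consumes, so discoverability would come largely for free.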

@vmarkovtsev (Collaborator)
I am back to this, finally. Reading and editing the document.

@vmarkovtsev (Collaborator)
I am frozen; there are some urgent blocking tasks in style-analyzer.

@vmarkovtsev (Collaborator)
I am back to life. Actually, I have already written most of my thoughts and the document is ready for @campoy's and @marnovo's review.

@campoy (author) commented Nov 26, 2018

Commented on the doc.

4 participants